Review of Striping Options

Whether you use Oracle striping, OS striping, or hardware striping, the goal is the same: distribute the random I/Os across as many disks as possible. In this way, you can keep the number of I/Os per second requested within the bounds of the physical disks.

If you use Oracle striping or OS striping, you can usually monitor the performance of each disk individually to see how hard it is being driven. If you use hardware striping, remember that the OS monitoring facilities typically see the disk volume as one logical disk. In that case, you can estimate how hard each disk is being driven by dividing the I/O rate for the volume by the number of drives.

With hardware and OS striping, the stripes are small enough that the I/Os are usually divided among the drives fairly evenly. Be sure to monitor the drives periodically to verify that you are not up against I/O limits.

Use this formula to calculate the I/O rate per drive:

I/Os per disk = (Number of I/Os per second per volume) / (Number of drives in the volume)

Suppose that you have a disk array with four drives generating 120 I/Os per second. The number of I/Os per second per disk is calculated as follows:

I/Os per disk = 120 / 4 = 30 I/Os per second per disk

For data volumes that are accessed randomly, you don’t want to push the disks past 50 to 60 I/Os per second per disk. To estimate how many disks you need for data volumes, use this formula:

Number of disks = I/Os per second needed / 60 I/Os per second per disk

If your application requires a certain data file to supply 500 I/Os per second (based on analysis and calculations), you can estimate the number of disk drives needed as follows:

Number of disks = 500 I/Os per second / 60 I/Os per second per disk = 8 1/3 disks, or 9 disks

This calculation gives you a good approximation for how large to build the data volumes with no fault tolerance. Chapter 15, “Disk Arrays,” looks at various RAID levels and additional I/Os that are incurred by adding disk fault tolerance.

Separate Data and Indexes

Another way to reduce disk contention is to separate the data files from their associated indexes. Remember that disk contention is caused by multiple processes trying to obtain the same resources. For a particularly “hot” table with data that many processes try to access, the indexes associated with that data will be “hot” also.

Placing the data files and index files on different disks reduces the contention on particularly hot tables. Distributing the files also allows more concurrency by allowing simultaneous accesses to the data files and the indexes. Look at the Oracle dynamic performance tables to determine which tables and indexes are the most active.
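As a simple illustration of this placement (the tablespace names, table, file names, and sizes here are hypothetical; substitute your own disks and sizing), you could create one tablespace on each of two disks, store the table's data in one, and build its index in the other:

Rem Tablespaces whose data files sit on two different physical disks
CREATE TABLESPACE app_data
    DATAFILE '/disk1/oracle/app_data01.dbf' SIZE 200M;

CREATE TABLESPACE app_index
    DATAFILE '/disk2/oracle/app_index01.dbf' SIZE 100M;

Rem Keep the table's data on one disk and its index on the other
CREATE TABLE orders (
    order_id   number,
    cust_id    number,
    order_date date)
    TABLESPACE app_data;

CREATE INDEX orders_cust
    ON orders (cust_id)
    TABLESPACE app_index;

Because the index and the table data now reside on different drives, an index lookup and the table access that follows it do not have to queue on the same disk.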

Eliminate Non-Oracle Disk I/Os

Although it is not necessary to eliminate all non-Oracle I/Os, reducing any significant non-Oracle I/O load will help performance. Most systems are tuned to meet a specific throughput or response-time requirement. Any additional I/Os that slow down Oracle can affect both of these requirements.

Another reason to reduce non-Oracle I/Os is to increase the accuracy of the Oracle dynamic performance table, V$FILESTAT. If only Oracle files are on the disks you are monitoring, the statistics in this table should be very accurate.
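For example, the following query against V$FILESTAT and V$DATAFILE (a common way to read these views; verify the column names against your release) shows the physical reads and writes recorded for each data file:

SELECT df.name, fs.phyrds, fs.phywrts
    FROM v$filestat fs, v$datafile df
    WHERE fs.file# = df.file#
    ORDER BY df.name;

Any non-Oracle I/O on the same disks does not appear in these counters, which is why the statistics are most accurate when the disks hold only Oracle files.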

Reducing Unnecessary I/O Overhead

Reducing unnecessary I/O overhead can increase the throughput available for user tasks. Unnecessary overhead such as chaining and migrating of rows hurts performance.

Migrating and chaining occur when an UPDATE statement increases the size of a row so that it no longer fits in the data block. When this happens, Oracle tries to find space for this new row. If a block is available with enough room, Oracle moves the entire row to that new block. This is called migrating. If no data block is available with enough space, Oracle splits the row into multiple pieces and stores them in several data blocks. This is called chaining.

Migrated and Chained Rows

Migrated rows cause overhead in the system because Oracle must find space for the row and then copy the row to the new data block. This takes both CPU time and I/Os. Therefore, any UPDATE statement that causes a migration incurs a performance penalty.

Chained rows cause overhead in the system not only when they are created but each time they are accessed. A chained row requires more than one I/O to read. Remember that Oracle reads data from disk in data blocks; each time a chained row is accessed, more than one block must be read into the SGA.
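Although it is not covered in this chapter, one common way to see how often Oracle is actually following chained or migrated rows is to check the 'table fetch continued row' statistic in V$SYSSTAT:

SELECT name, value
    FROM v$sysstat
    WHERE name = 'table fetch continued row';

A value that grows steadily under load suggests that row accesses frequently require reading more than one block.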

You can check for chained and migrated rows with the LIST CHAINED ROWS option of the ANALYZE command, as these SQL statements show:

SQL> Rem
SQL> Rem Create the Table Used by LIST CHAINED ROWS
SQL> Rem
SQL> CREATE TABLE chained_rows (
  2  owner_name    varchar2(30),
  3  table_name    varchar2(30),
  4  cluster_name  varchar2(30),
  5  head_rowid    rowid,
  6  timestamp     date);

Table created.

SQL> Rem
SQL> Rem Analyze the Table in Question
SQL> Rem
SQL> ANALYZE
  2   TABLE scott.emp LIST CHAINED ROWS;

Table analyzed.

SQL> Rem
SQL> Rem Check the Results
SQL> Rem
SQL> SELECT * from chained_rows;

no rows selected

If any rows are selected, you have either chained or migrated rows. To solve the problem of migrated rows, copy the rows in question to a temporary table, delete the rows from the initial table, and reinsert the rows into the original table from the temporary table.
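Here is a sketch of that fix for the SCOTT.EMP table used in the example above; the holding-table name EMP_TEMP is arbitrary, and you should make sure no constraints or triggers interfere before deleting and reinserting rows:

Rem Copy the rows listed in CHAINED_ROWS to a holding table
CREATE TABLE emp_temp AS
    SELECT * FROM scott.emp
    WHERE rowid IN (SELECT head_rowid FROM chained_rows
                    WHERE owner_name = 'SCOTT' AND table_name = 'EMP');

Rem Remove the migrated rows from the original table
DELETE FROM scott.emp
    WHERE rowid IN (SELECT head_rowid FROM chained_rows
                    WHERE owner_name = 'SCOTT' AND table_name = 'EMP');

Rem Reinsert them; the reinserted rows are stored in blocks with enough room
INSERT INTO scott.emp SELECT * FROM emp_temp;
COMMIT;

DROP TABLE emp_temp;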

After fixing the migrated rows, delete the old entries from the CHAINED_ROWS table and run the ANALYZE ... LIST CHAINED ROWS command again; any rows still listed are truly chained. If you see an abundance of chained rows, this is an indication that the Oracle database block size is too small. You may want to export the data and rebuild the database with a larger block size.

You may not be able to avoid having chained rows, especially if your table has a LONG column or long CHAR or VARCHAR2 columns. If you are aware of very large columns, it can be advantageous to adjust the database block size before implementing the database.

A properly sized block ensures that the blocks are used efficiently and I/Os are kept to a minimum. Don’t over-build the blocks or you may end up wasting space. The block size is determined by the Oracle parameter DB_BLOCK_SIZE. Remember that the amount of memory used for database block buffers is calculated as follows:

Memory used = DB_BLOCK_BUFFERS (number) * DB_BLOCK_SIZE (bytes)
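For example, with DB_BLOCK_BUFFERS set to 2,000 and DB_BLOCK_SIZE set to 4,096 bytes (values chosen only for illustration), the block buffers alone use

Memory used = 2,000 * 4,096 bytes = 8,192,000 bytes, or roughly 8M

of memory in the SGA.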

Be careful to avoid paging or swapping caused by an SGA that doesn’t fit into RAM.

